72 research outputs found

    An interdisciplinary concept for human-centered explainable artificial intelligence - Investigating the impact of explainable AI on end-users

    Since the 1950s, Artificial Intelligence (AI) applications have captivated people. However, this fascination has always been accompanied by disillusionment about the limitations of the technology. Today, machine learning methods such as Deep Neural Networks (DNN) are successfully used in various tasks. However, these methods also have limitations: their complexity makes their decisions no longer comprehensible to humans; they are black boxes. The research branch of Explainable AI (XAI) addresses this problem by investigating how to make AI decisions comprehensible. This desire is not new. In the 1970s, developers of intrinsically explainable AI approaches, so-called white boxes (e.g., rule-based systems), were already concerned with AI explanations. Nowadays, with the increased use of AI systems in all areas of life, the design of comprehensible systems has become increasingly important. Developing such systems is part of Human-Centered AI (HCAI) research, which integrates human needs and abilities into the design of AI interfaces. This requires an understanding of how humans perceive XAI and how AI explanations influence the interaction between humans and AI. One of the open questions concerns the investigation of XAI for end-users, i.e., people who have no expertise in AI but interact with such systems or are impacted by the system's decisions. This dissertation investigates the impact of different levels of interactive XAI of white- and black-box AI systems on end-users' perceptions. Based on an interdisciplinary concept presented in this work, it is examined how the content, type, and interface of explanations of DNN (black-box) and rule-based systems (white-box) are perceived by end-users. How XAI influences end-users' mental models, trust, self-efficacy, cognitive workload, and emotional state regarding the AI system is at the center of the investigation. At the beginning of the dissertation, general concepts regarding AI, explanations, and the psychological constructs of mental models, trust, self-efficacy, cognitive load, and emotions are introduced. Subsequently, related work on the design and investigation of XAI for users is presented. This serves as a basis for the concept of Human-Centered Explainable AI (HC-XAI) presented in this dissertation, which combines an XAI design approach with user evaluations. The author pursues an interdisciplinary approach that integrates knowledge from the research areas of (X)AI, Human-Computer Interaction, and Psychology. Based on this interdisciplinary concept, a five-step approach is derived and applied to illustrative surveys and experiments in the empirical part of this dissertation. To illustrate the first two steps, a persona approach for HC-XAI is presented, and based on that, a template for designing personas is provided. To illustrate the usage of the template, three surveys are presented that ask end-users about their attitudes and expectations towards AI and XAI. The personas generated from the survey data indicate that end-users often lack knowledge of XAI and that their perception of it depends on demographic and personality-related characteristics. Steps three to five deal with the design of XAI for concrete applications. For this, different levels of interactive XAI are presented and investigated in experiments with end-users. Two rule-based systems (i.e., white-box) and four DNN-based systems (i.e., black-box) are used for this purpose.
These are applied for three purposes: cooperation and collaboration, education, and medical decision support. Six user studies were conducted, which differed in the interactivity of the XAI system used. The results show that end-users' trust in and mental models of AI depend strongly on the context of use and on the design of the explanation itself. For example, explanations mediated by a virtual agent are shown to promote trust. The content and type of explanations are also perceived differently by users. The studies further show that end-users in different XAI application contexts desire interactive explanations. The dissertation concludes with a summary of the scientific contribution, points out limitations of the presented work, and gives an outlook on possible future research topics for integrating explanations into everyday AI systems and thus enabling the comprehensible handling of AI for all people.
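To make the white-box versus black-box distinction above concrete, the following minimal Python sketch (illustrative only, not taken from the dissertation; all function names, features, and thresholds are hypothetical) contrasts a rule-based system, whose fired rule doubles as the explanation, with an opaque model that needs a post-hoc attribution step.

```python
# Illustrative sketch (hypothetical, not from the dissertation): a white-box
# rule-based system whose decision trace IS the explanation, versus a
# black-box model that requires a post-hoc explanation method.
import math


def rule_based_stress_alert(heart_rate, sleep_hours):
    """White-box: the fired rule is returned as the explanation."""
    if heart_rate > 100 and sleep_hours < 6:
        return "high stress", "Rule fired: heart_rate > 100 AND sleep_hours < 6"
    return "normal", "No stress rule fired"


def black_box_predict(features):
    """Stand-in for a DNN: the learned weights are not human-readable rules."""
    weights = [0.03, -0.2]  # hypothetical learned parameters
    score = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))


def post_hoc_attribution(features, predict, eps=1e-3):
    """Crude local explanation: finite-difference sensitivity per feature."""
    base = predict(features)
    attributions = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += eps
        attributions.append((predict(perturbed) - base) / eps)
    return attributions


if __name__ == "__main__":
    print(rule_based_stress_alert(heart_rate=110, sleep_hours=5))
    x = [110.0, 5.0]
    print(black_box_predict(x), post_hoc_attribution(x, black_box_predict))
```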

    Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps

    With advances in reinforcement learning (RL), agents are now being developed in high-stakes application domains such as healthcare and transportation. Explaining the behavior of these agents is challenging, as the environments in which they act have large state spaces, and their decision-making can be affected by delayed rewards, making it difficult to analyze their behavior. To address this problem, several approaches have been developed. Some approaches attempt to convey the global behavior of the agent, describing the actions it takes in different states. Other approaches devised local explanations which provide information regarding the agent's decision-making in a particular state. In this paper, we combine global and local explanation methods, and evaluate their joint and separate contributions, providing (to the best of our knowledge) the first user study of combined local and global explanations for RL agents. Specifically, we augment strategy summaries that extract important trajectories of states from simulations of the agent with saliency maps which show what information the agent attends to. Our results show that the choice of what states to include in the summary (global information) strongly affects people's understanding of agents: participants shown summaries that included important states significantly outperformed participants who were presented with agent behavior in a randomly chosen set of world-states. We find mixed results with respect to augmenting demonstrations with saliency maps (local information), as the addition of saliency maps did not significantly improve performance in most cases. However, we do find some evidence that saliency maps can help users better understand what information the agent relies on in its decision-making, suggesting avenues for future work that can further improve explanations of RL agents.
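The two ingredients combined here can be illustrated with a hedged sketch: a HIGHLIGHTS-style global summary that keeps the states with the largest Q-value spread (an assumption about the importance score, not a reproduction of the paper's pipeline), plus a simple occlusion-based saliency map standing in for the local component. The Q-function and policy score below are random placeholders, not a trained agent.

```python
# Minimal sketch under stated assumptions: HIGHLIGHTS-style state selection
# (importance = max_a Q(s,a) - min_a Q(s,a)) plus occlusion-based saliency.
import numpy as np


def summarize_trajectory(states, q_values, k=5):
    """Global explanation: keep the k states with the largest Q-value spread."""
    importance = [q_values(s).max() - q_values(s).min() for s in states]
    top = np.argsort(importance)[-k:][::-1]
    return [states[i] for i in top]


def occlusion_saliency(state_image, policy_score, patch=4):
    """Local explanation: how much does masking each patch change the
    policy's scalar score for this state?"""
    base = policy_score(state_image)
    h, w = state_image.shape
    saliency = np.zeros_like(state_image, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = state_image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0
            saliency[y:y + patch, x:x + patch] = abs(base - policy_score(occluded))
    return saliency


# Toy usage with random stand-ins for the agent's Q-function and policy score.
rng = np.random.default_rng(0)
states = [rng.random((16, 16)) for _ in range(100)]
q_values = lambda s: rng.random(4)             # hypothetical Q(s, .)
policy_score = lambda img: float(img.mean())   # hypothetical scalar policy output
summary = summarize_trajectory(states, q_values, k=5)
saliency_maps = [occlusion_saliency(s, policy_score) for s in summary]
```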

    What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps

    In healthcare, AI systems support clinicians and patients in diagnosis, treatment, and monitoring, but many systems' poor explainability remains challenging for practical application. Overcoming this barrier is the goal of explainable AI (XAI). However, an explanation can be perceived differently by different people and thus may not solve the black-box problem for everyone. The domain of Human-Centered AI deals with this problem by adapting AI to its users. We present a user-centered persona concept to evaluate XAI and use it to investigate end-users' preferences for various explanation styles and contents in a mobile health stress monitoring application. The results of our online survey show that users' demographics and personality, as well as the type of explanation, impact explanation preferences, indicating that these are essential features for XAI design. We condensed the results into three prototypical user personas: power-, casual-, and privacy-oriented users. Our insights bring an interactive, human-centered XAI closer to practical application.

    Do We Need Explainable AI in Companies? Investigation of Challenges, Expectations, and Chances from Employees' Perspective

    By using AI, companies want to improve their business success and their chances of innovation. However, in doing so, they (companies and their employees) are faced with new requirements. In particular, legal regulations call for transparency and comprehensibility of AI systems. The field of XAI deals with these issues. Currently, results are mostly obtained in lab studies, while the transfer to real-world applications is lacking. This includes considering employees' needs and attributes, which may differ from those of end-users in the lab. Therefore, this project report provides initial insights into employees' specific needs and attitudes towards (X)AI. For this, the results of a project's online survey are reported that investigate (X)AI from two perspectives (i.e., the company level and the employee level) to create a holistic view of the challenges, risks, and needs of employees. Our findings suggest that AI and XAI are well-known terms that employees perceive as important. This is a first step for XAI to become a potential driver that fosters the successful use of AI by providing transparent and comprehensible insights into AI technologies. To benefit from (X)AI technologies, supportive employees at the management level are valuable catalysts. This work contributes to the ongoing demand for XAI research to develop human-centered and domain-specific XAI designs.

    This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning

    With the ongoing rise of machine learning, the need for methods that explain the decisions made by artificial intelligence systems is becoming an increasingly important topic. Especially for image classification tasks, many state-of-the-art tools for explaining such classifiers rely on visual highlighting of important areas of the input data. In contrast, counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image such that the classifier would have made a different prediction. By doing so, the users of counterfactual explanation systems are equipped with a completely different kind of explanatory information. However, methods for generating realistic counterfactual explanations for image classifiers are still rare. In this work, we present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques. Additionally, we conduct a user study to evaluate our approach in a use case inspired by a healthcare scenario. Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems that work with saliency maps, namely LIME and LRP.
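As a rough illustration of the inference side of such a counterfactual system, the sketch below translates an input image with a generator and checks whether the classifier's prediction flips; both networks are untrained stand-ins, and the adversarial image-to-image training of the actual approach is deliberately omitted.

```python
# Illustrative sketch only (untrained stand-ins, not the paper's system):
# a generator translates the input towards the counter class, and the
# classifier verifies whether the translated image flips the prediction.
import torch
import torch.nn as nn

classifier = nn.Sequential(            # hypothetical binary image classifier
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)

generator = nn.Sequential(             # hypothetical image-to-image translator
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
)


def counterfactual(x: torch.Tensor):
    """Return the translated image and the per-pixel change that would be
    shown to the user as the counterfactual explanation."""
    with torch.no_grad():
        original_class = classifier(x).argmax(dim=1)
        x_cf = generator(x)            # in the real system, trained adversarially
        new_class = classifier(x_cf).argmax(dim=1)
    return x_cf, (x_cf - x).abs(), original_class, new_class


x = torch.rand(1, 1, 28, 28)           # toy grayscale input
x_cf, delta, before, after = counterfactual(x)
print(before.item(), after.item(), delta.mean().item())
```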